Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Authors

  • Yichen Chen
  • Dongdong Ge
  • Mengdi Wang
  • Zizhuo Wang
  • Yinyu Ye
  • Hao Yin
Abstract

We show that finding a global optimal solution for the regularized Lq-minimization problem (q ≥ 1) is strongly NP-hard if the penalty function is concave but not linear in a neighborhood of zero and satisfies a very mild technical condition. This implies that it is impossible to have a fully polynomial-time approximation scheme (FPTAS) for such problems unless P = NP. This result clarifies the complexity for a large class of regularized optimization problems recently studied in the literature.
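To fix notation, the problem class described in the abstract can be sketched as follows. This is a hedged reconstruction from the abstract's wording; the data matrix A, vector b, and penalty function p are assumed symbols, not given on this page:

```latex
\min_{x \in \mathbb{R}^n} \; \|Ax - b\|_q^q \;+\; \sum_{i=1}^{n} p(|x_i|), \qquad q \ge 1,
```

where p : [0, ∞) → [0, ∞) is concave but not linear in any neighborhood of zero. The hardness claim is that computing a global minimizer of this objective is strongly NP-hard under a mild additional condition on p.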


Similar articles

Hardness of Approximation for Sparse Optimization with L0 Norm

In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...


Worst-Case Hardness of Approximation for Sparse Optimization with L0 Norm

In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...


Rejoinder: One-step Sparse Estimates in Nonconcave Penalized Likelihood Models

Most traditional variable selection criteria, such as the AIC and the BIC, are (or are asymptotically equivalent to) the penalized likelihood with the L0 penalty, namely, pλ(|β|) = 2λ I(|β| ≠ 0), with appropriate values of λ (Fan and Li [7]). In general, optimizing the L0-penalized likelihood function via exhaustive search over all subset models is an NP-hard computational problem....


A Smoothing Method for Sparse Optimization over Polyhedral Sets

In this paper, we investigate a class of heuristic schemes to solve the NP-hard problem of minimizing the ℓ0-norm over a polyhedral set. A well-known approximation is to consider the convex problem of minimizing the ℓ1-norm. We are interested in finding improved results in cases where the ℓ1-norm problem does not provide an optimal solution to the ℓ0-norm problem. We consider a relaxation technique us...
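The relaxation discussed in this abstract can be written out, under assumed notation (a polyhedral set {x : Ax ≤ b}, neither A nor b given on this page), as the standard convex surrogate:

```latex
\min_{x} \; \|x\|_0 \ \ \text{s.t.}\ Ax \le b
\qquad \longrightarrow \qquad
\min_{x} \; \|x\|_1 \ \ \text{s.t.}\ Ax \le b.
```

The left problem counts nonzero entries and is NP-hard in general; the right problem is a linear program and hence tractable, but its minimizer need not be the sparsest feasible point, which is the gap the paper's smoothing method targets.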


A geometric framework for nonconvex optimization duality using augmented lagrangian functions

We provide a unifying geometric framework for the analysis of general classes of duality schemes and penalty methods for nonconvex constrained optimization problems. We present a separation result for nonconvex sets via general concave surfaces. We use this separation result to provide necessary and sufficient conditions for establishing strong duality between geometric primal and dual problems...



Publication date: 2017